Curriculum learning and self-paced learning are training strategies that gradually feed samples from easy to more complex. They have attracted increasing attention owing to their excellent performance in robotic vision. Most recent works focus on designing curricula based on the difficulty of the input samples or on smoothing the feature maps. However, smoothing labels to control the learning utility in a curriculum manner remains unexplored. In this work, we design a paced curriculum by label smoothing (P-CBLS) using paced learning with uniform label smoothing (ULS) for classification tasks, and fuse uniform and spatially varying label smoothing (SVLS) in a curriculum manner for semantic segmentation tasks. In ULS and SVLS, a larger smoothing factor imposes a heavier smoothing penalty on the true label and limits the model to learning less information. We therefore design a curriculum by label smoothing (CBLS): we set a larger smoothing value at the beginning of training and gradually decrease it to zero, controlling the model's learning utility from lower to higher. We also design a confidence-aware pacing function and combine it with CBLS to investigate the benefits of various curricula. The proposed techniques are validated on four robotic surgery datasets covering multi-class classification, multi-label classification, captioning, and segmentation tasks. We also investigate the robustness of our method by corrupting the validation data at different severity levels. Our extensive analysis shows that the proposed method improves prediction accuracy and robustness.
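As a concrete illustration of the curriculum idea above, the following is a minimal sketch of uniform label smoothing with a smoothing factor that starts large and decays to zero over training; the linear schedule and the helper names (`uls_targets`, `cbls_epsilon`, `train_step`) are illustrative assumptions, not the paper's exact pacing function.

```python
import torch
import torch.nn.functional as F

def uls_targets(labels, num_classes, eps):
    """Uniform label smoothing: the true class gets 1 - eps, the rest share eps."""
    one_hot = F.one_hot(labels, num_classes).float()
    return one_hot * (1.0 - eps) + eps / num_classes

def cbls_epsilon(epoch, total_epochs, eps0=0.3):
    """Curriculum schedule: start with heavy smoothing, decay linearly to zero."""
    return eps0 * max(0.0, 1.0 - epoch / total_epochs)

def train_step(model, images, labels, num_classes, epoch, total_epochs, optimizer):
    # Smoothing factor shrinks as training progresses (low -> high learning utility)
    eps = cbls_epsilon(epoch, total_epochs)
    targets = uls_targets(labels, num_classes, eps)
    logits = model(images)
    loss = torch.sum(-targets * F.log_softmax(logits, dim=-1), dim=-1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```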
Accurate airway extraction from computed tomography (CT) images is a critical step for planning navigation bronchoscopy and for the quantitative assessment of airway-related chronic obstructive pulmonary disease (COPD). Existing methods struggle to segment the airway sufficiently, especially the high-generation airways, under the constraint of limited labels, and cannot meet clinical requirements for COPD. We propose a novel two-stage 3D contextual transformer-based U-Net for airway segmentation from CT images. The method consists of two stages that perform initial and refined airway segmentation. The two stages share the same subnetwork but take different airway masks as input. Contextual transformer blocks are used in both the encoder and decoder paths of the subnetwork to achieve high-quality airway segmentation effectively. In the first stage, the total airway mask and the CT images are provided to the subnetwork; in the second stage, the intrapulmonary airway mask and the corresponding CT scans are provided. The predictions of the two stages are then merged as the final prediction. Extensive experiments were performed on an in-house dataset and multiple public datasets. Quantitative and qualitative analyses demonstrate that our proposed method extracts many more branches and greater tree length while achieving state-of-the-art airway segmentation performance. The code is available at https://github.com/zhaozsq/airway_segmentation.
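A minimal sketch of the two-stage flow described above, under the assumption that each stage concatenates an airway mask with the CT volume as input channels to the shared subnetwork and that the two stage outputs are merged by a union; the function and argument names are hypothetical, not from the released code.

```python
import torch

def two_stage_airway_segmentation(subnetwork, ct_volume, total_airway_mask,
                                  intrapulmonary_mask, threshold=0.5):
    """Run the shared subnetwork twice with different airway masks and merge."""
    # Stage 1: total airway mask + CT volume -> initial segmentation
    stage1_input = torch.cat([ct_volume, total_airway_mask], dim=1)
    pred1 = torch.sigmoid(subnetwork(stage1_input))

    # Stage 2: intrapulmonary airway mask + CT volume -> refined segmentation
    stage2_input = torch.cat([ct_volume, intrapulmonary_mask], dim=1)
    pred2 = torch.sigmoid(subnetwork(stage2_input))

    # Merge the two predictions (union of binarized outputs) as the final result
    return ((pred1 > threshold) | (pred2 > threshold)).float()
```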
Purpose: Surgical scene understanding with tool-tissue interaction recognition and automatic report generation can play an important role in intra-operative guidance, decision-making, and postoperative analysis in robotic surgery. However, domain shifts between different surgeries, with inter- and intra-patient variation and the appearance of novel instruments, degrade model prediction performance. Moreover, it requires output from multiple models, which can be computationally expensive and affect real-time performance. Methodology: A multi-task learning (MTL) model is proposed for surgical report generation and tool-tissue interaction prediction that deals with domain shift problems. The model consists of a shared feature extractor, a mesh-transformer branch for captioning, and a graph attention branch for tool-tissue interaction prediction. The shared feature extractor employs class-incremental contrastive learning (CICL) to tackle intensity shift and novel class appearance in the target domain. We design Laplacian of Gaussian (LoG)-based curriculum learning into both the shared and task-specific branches to enhance model learning. We incorporate a task-aware asynchronous MTL optimization technique to fine-tune the shared weights and converge both tasks optimally. Results: The proposed MTL model trained using task-aware optimization and fine-tuning techniques achieved a balanced performance (BLEU score of 0.4049 for scene captioning and accuracy of 0.3508 for interaction detection) for both tasks on the target domain and performed on par with single-task models in domain adaptation. Conclusion: The proposed multi-task model was able to adapt to domain shifts, incorporate novel instruments in the target domain, and perform tool-tissue interaction detection and report generation on par with single-task models.
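A minimal skeleton of the shared-backbone multi-task layout described above; the backbone and the two heads below are simple placeholders (not the paper's CICL feature extractor, mesh transformer, or graph attention branch), intended only to show how one shared feature representation feeds both task branches.

```python
import torch
import torch.nn as nn

class SurgicalMTLModel(nn.Module):
    """Shared feature extractor with a captioning head and an interaction head."""
    def __init__(self, feat_dim=512, vocab_size=1000, num_interactions=13):
        super().__init__()
        # Shared feature extractor (stand-in for the CICL-trained backbone)
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Captioning branch (placeholder for the mesh-transformer decoder)
        self.caption_head = nn.Linear(feat_dim, vocab_size)
        # Interaction branch (placeholder for the graph attention network)
        self.interaction_head = nn.Linear(feat_dim, num_interactions)

    def forward(self, images):
        shared = self.backbone(images)
        return self.caption_head(shared), self.interaction_head(shared)
```

The task-aware asynchronous optimization mentioned in the abstract would then control how and when each task's loss updates the shared weights; that schedule is not reproduced here.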
Gastrointestinal endoscopic surgery (GES) places high demands on instrument size and distal dexterity because of the narrow endoscopic channel and the tortuous human gastrointestinal tract. This paper uses nickel-titanium (NiTi) wires to develop a miniature 3-DOF (pitch-yaw-translation) flexible parallel robotic wrist (FPRW). In addition, we assembled an electric knife on the wrist's connection interface and then operated it to perform endoscopic submucosal dissection (ESD) in a porcine stomach. The effective performance in each ESD workflow demonstrates that the designed FPRW has a sufficient workspace, high distal dexterity, and high positioning accuracy.
Surgical captioning plays an important role in surgical instruction prediction and report generation. However, most captioning models still rely on computationally heavy object detectors or feature extractors to extract regional features. Moreover, detection models require additional bounding box annotations, which are expensive and demand skilled annotators. These factors cause inference latency and limit the deployment of captioning models in real-time robotic surgery. To this end, we design an end-to-end, detector-free and feature-extractor-free captioning model by leveraging a patch-based shifted-window technique. We propose a shifted-window-based multi-layer perceptron transformer captioning model (SwinMLP-TranCAP) with faster inference speed and less computation. SwinMLP-TranCAP replaces the multi-head attention module with window-based multi-head MLPs. Such designs have mainly been applied to image understanding tasks, but few works have studied the caption generation task. SwinMLP-TranCAP is also extended into a video version for video captioning tasks using 3D patches and windows. Compared with previous detector-based or feature-extractor-based models, our model greatly simplifies the architecture design while maintaining performance on two surgical datasets. The code is publicly available at https://github.com/xumengyaamy/swinmlp_trancap.
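The core substitution described above, window-based token mixing with an MLP instead of multi-head self-attention, can be sketched as follows; the block layout and names are assumptions for illustration, not the released SwinMLP-TranCAP code.

```python
import torch
import torch.nn as nn

class WindowMLPBlock(nn.Module):
    """Tokens inside each local window are mixed by an MLP rather than attention."""
    def __init__(self, dim, window_size, hidden_ratio=2):
        super().__init__()
        self.window_size = window_size
        self.norm = nn.LayerNorm(dim)
        n_tokens = window_size * window_size
        # MLP that mixes the tokens within a window (applied per channel)
        self.token_mlp = nn.Sequential(
            nn.Linear(n_tokens, hidden_ratio * n_tokens), nn.GELU(),
            nn.Linear(hidden_ratio * n_tokens, n_tokens),
        )

    def forward(self, x):
        # x: (B, H, W, C) patch tokens
        B, H, W, C = x.shape
        ws = self.window_size
        shortcut = x
        x = self.norm(x)
        # Partition into non-overlapping ws x ws windows
        x = x.view(B, H // ws, ws, W // ws, ws, C)
        x = x.permute(0, 1, 3, 5, 2, 4).reshape(-1, C, ws * ws)
        x = self.token_mlp(x)  # mix tokens within each window
        # Reverse the window partition
        x = x.reshape(B, H // ws, W // ws, C, ws, ws)
        x = x.permute(0, 1, 4, 2, 5, 3).reshape(B, H, W, C)
        return shortcut + x
```

For example, `WindowMLPBlock(dim=96, window_size=7)` applied to a `(2, 14, 14, 96)` token map returns a tensor of the same shape.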
Data diversity and volume are crucial to the success of training deep learning models, while in the medical imaging field, the difficulty and cost of data collection and annotation are especially high. Particularly in robotic surgery, data scarcity and imbalance severely affect model accuracy and limit the design and deployment of deep-learning-based surgical applications such as surgical instrument segmentation. With this in mind, in this paper we rethink the surgical instrument segmentation task and propose a one-to-many data generation solution that frees us from the complicated and expensive data collection and annotation processes of robotic surgery. In our method, we use only a single surgical background tissue image and a few open-source instrument images as seed images, and apply multiple augmentation and blending techniques to synthesize a large number of image variants. In addition, we introduce chained augmentation mixing during training to further enhance data diversity. The proposed approach is evaluated on the real datasets of the EndoVis-2018 and EndoVis-2017 surgical scene segmentation tasks. Our empirical analysis suggests that without the high cost of data collection and annotation, we can achieve decent surgical instrument segmentation performance. Moreover, we observe that our method can handle novel instrument prediction in the deployment domain. We hope our encouraging results motivate researchers to emphasize data-centric methods to overcome deep learning limitations beyond data shortage, such as class imbalance, domain adaptation, and incremental learning.
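A minimal sketch of the one-to-many synthesis idea: paste randomly placed open-source instrument images onto a single background tissue image and derive the segmentation mask from the paste location; the blending, placement, and chaining details below are simplified assumptions rather than the paper's full pipeline.

```python
import random
import numpy as np

def blend_instrument(background, instrument_rgba):
    """Paste one instrument (RGBA, alpha as mask) onto a background tissue
    image at a random location; return the composite and its binary mask."""
    bg = background.copy()
    h, w = bg.shape[:2]
    ih, iw = instrument_rgba.shape[:2]
    # Random top-left placement that keeps the instrument inside the frame
    y = random.randint(0, max(0, h - ih))
    x = random.randint(0, max(0, w - iw))
    alpha = instrument_rgba[..., 3:4] / 255.0
    region = bg[y:y + ih, x:x + iw]
    bg[y:y + ih, x:x + iw] = (alpha * instrument_rgba[..., :3]
                              + (1 - alpha) * region).astype(bg.dtype)
    mask = np.zeros((h, w), dtype=np.uint8)
    mask[y:y + ih, x:x + iw] = (alpha[..., 0] > 0.5).astype(np.uint8)
    return bg, mask

def synthesize_dataset(background, instruments, n_samples=1000):
    """Generate many (image, mask) pairs from one background tissue image
    and a few open-source instrument seed images."""
    samples = []
    for _ in range(n_samples):
        img, mask = blend_instrument(background, random.choice(instruments))
        # Further photometric augmentations could be chained here,
        # in the spirit of the chained augmentation mixing described above.
        samples.append((img, mask))
    return samples
```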
Visual question answering (VQA) in surgery is largely unexplored. Expert surgeons are scarce and often overloaded with clinical and academic workloads. This overload often limits their time to answer questions from patients, medical students, or junior residents about surgical procedures. At times, students and junior residents also refrain from asking too many questions during classes to reduce disruption. Although computer-assisted simulators and recordings of past surgical procedures allow them to observe and improve their skills, they still rely heavily on medical experts to answer their questions. A surgical VQA system serving as a reliable "second opinion" could act as a backup and ease the load on medical experts in answering these questions. The lack of annotated medical data and the presence of domain-specific terms have limited the exploration of VQA for surgical procedures. In this work, we design a surgical VQA task that answers questions about surgical procedures based on the surgical scene. Extending the MICCAI Endoscopic Vision Challenge 2018 dataset and a workflow recognition dataset, we introduce two surgical VQA datasets with classification-based and sentence-based answers. To perform surgical VQA, we employ vision-text transformer models. We further introduce an MLP-based residual VisualBERT encoder model that enforces interaction between visual tokens and text tokens, improving performance for classification-based answers. Furthermore, we study the influence of the number of input image patches and of temporal visual features on model performance for both classification-based and sentence-based answers.
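A generic sketch of a vision-text encoder for the classification-based answers described above, where visual patch features and question tokens are concatenated and jointly encoded; this is a stand-in with assumed dimensions and names, not the paper's MLP-based residual VisualBERT encoder.

```python
import torch
import torch.nn as nn

class SimpleSurgicalVQA(nn.Module):
    """Joint vision-text encoding with answer classification from a [CLS] token."""
    def __init__(self, vocab_size, num_answers, dim=256):
        super().__init__()
        self.text_embed = nn.Embedding(vocab_size, dim)
        self.visual_proj = nn.Linear(2048, dim)  # e.g., pooled CNN patch features
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        encoder_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8,
                                                   batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=4)
        self.classifier = nn.Linear(dim, num_answers)

    def forward(self, visual_feats, question_ids):
        # visual_feats: (B, num_patches, 2048); question_ids: (B, L)
        v = self.visual_proj(visual_feats)
        t = self.text_embed(question_ids)
        cls = self.cls_token.expand(v.size(0), -1, -1)
        tokens = torch.cat([cls, v, t], dim=1)   # joint vision-text sequence
        encoded = self.encoder(tokens)
        return self.classifier(encoded[:, 0])    # answer logits from [CLS]
```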
Context-aware decision support in the operating room can foster surgical safety and efficiency by leveraging real-time feedback from surgical workflow analysis. Most existing works recognize surgical activities at a coarse-grained level, such as phases, steps or events, leaving out fine-grained interaction details about the surgical activity; yet those are needed for more helpful AI assistance in the operating room. Recognizing surgical actions as <instrument, verb, target> triplets delivers comprehensive details about the activities taking place in surgical videos. This paper presents CholecTriplet2021: an endoscopic vision challenge organized at MICCAI 2021 for the recognition of surgical action triplets in laparoscopic videos. The challenge granted private access to the large-scale CholecT50 dataset, which is annotated with action triplet information. In this paper, we present the challenge setup and assessment of the state-of-the-art deep learning methods proposed by the participants during the challenge. A total of 4 baseline methods from the challenge organizers and 19 new deep learning algorithms by competing teams are presented to recognize surgical action triplets directly from surgical videos, achieving mean average precision (mAP) ranging from 4.2% to 38.1%. This study also analyzes the significance of the results obtained by the presented approaches, performs a thorough methodological comparison and an in-depth result analysis, and proposes a novel ensemble method for enhanced recognition. Our analysis shows that surgical workflow analysis is not yet solved, and also highlights interesting directions for future research on fine-grained surgical activity recognition, which is of utmost importance for the development of AI in surgery.
In this letter, a novel palpation-based incision detection strategy is proposed, potentially for use in robotic tracheotomy. A tactile sensor is introduced to measure tissue hardness in specific laryngeal regions through gentle contact. A kernel fusion method is proposed to combine the squared exponential (SE) kernel with the Ornstein-Uhlenbeck (OU) kernel, addressing the observation that existing kernel functions are suboptimal in this scenario. Moreover, we further regularize the exploration factor and the greedy factor, and the moving distance of the tactile sensor and the rotation angle of the robot base are considered as new factors in the acquisition strategy during incision localization. We conducted simulations and physical experiments to compare the newly proposed algorithm, a reallocated acquisition strategy with energy constraint in incision detection (RASEC), with the current palpation-based acquisition strategy. The results indicate that the proposed acquisition strategy with the fused kernel can successfully localize the incision with the highest algorithm performance (average precision 0.932, average recall 0.973, average F1 score 0.952). During the robotic palpation process, the cumulative moving distance is reduced by 50% and the cumulative rotation angle is reduced by 71.4%, with no sacrifice in overall performance. It is therefore demonstrated that RASEC can effectively indicate the cutting region in the laryngeal area while greatly reducing energy loss.
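The kernel fusion described above can be sketched as a weighted combination of the SE and OU kernels inside a Gaussian-process hardness model; the convex-combination weight `w` and the helper functions are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def se_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    """Squared exponential (SE) kernel."""
    d2 = np.sum((x1[:, None, :] - x2[None, :, :]) ** 2, axis=-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def ou_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    """Ornstein-Uhlenbeck (OU, exponential) kernel."""
    d = np.sqrt(np.sum((x1[:, None, :] - x2[None, :, :]) ** 2, axis=-1))
    return variance * np.exp(-d / lengthscale)

def fused_kernel(x1, x2, w=0.5, **kw):
    """Convex combination of SE and OU kernels (still a valid kernel)."""
    return w * se_kernel(x1, x2, **kw) + (1.0 - w) * ou_kernel(x1, x2, **kw)

def gp_posterior_mean(X_train, y_train, X_test, noise=1e-3, **kw):
    """GP posterior mean of tissue hardness at candidate probe points."""
    K = fused_kernel(X_train, X_train, **kw) + noise * np.eye(len(X_train))
    K_s = fused_kernel(X_test, X_train, **kw)
    return K_s @ np.linalg.solve(K, y_train)
```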
Masked image modeling (MIM) performs strongly in pre-training large vision Transformers (ViTs). However, small models that are critical for real-world applications cannot, or only marginally, benefit from this pre-training approach. In this paper, we explore distillation techniques to transfer the success of large MIM-based pre-trained models to smaller ones. We systematically study different options in the distillation framework, including distillation targets, losses, input, network regularization, sequential distillation, etc., revealing that: 1) Distilling token relations is more effective than CLS-token- and feature-based distillation; 2) Using an intermediate layer of the teacher network as the target performs better than using the last layer when the depth of the student mismatches that of the teacher; 3) Weak regularization is preferred; etc. With these findings, we achieve significant fine-tuning accuracy improvements over from-scratch MIM pre-training on ImageNet-1K classification for the ViT-Tiny, ViT-Small, and ViT-Base models, with gains of +4.2%/+2.4%/+1.4%, respectively. Our TinyMIM model of base size achieves 52.2 mIoU on ADE20K semantic segmentation, which is +4.1 higher than the MAE baseline. Our TinyMIM model of tiny size achieves 79.6% top-1 accuracy on ImageNet-1K image classification, which sets a new record for small vision models of the same size and computation budget. This strong performance suggests an alternative way to develop small vision Transformer models, namely by exploring better training methods rather than introducing inductive biases into architectures as in most previous works. Code is available at https://github.com/OliverRensu/TinyMIM.
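A simplified reading of "distilling token relations" is to match the student's and the teacher's token-to-token similarity distributions; the sketch below follows that reading with a KL objective and is not the exact TinyMIM loss.

```python
import torch
import torch.nn.functional as F

def token_relation_distillation(student_tokens, teacher_tokens, tau=1.0):
    """KL divergence between teacher and student token-relation distributions.
    tokens: (B, N, D) patch token features from student/teacher backbones."""
    def relation(t):
        t = F.normalize(t, dim=-1)
        sim = torch.bmm(t, t.transpose(1, 2)) / tau   # (B, N, N) similarities
        return F.log_softmax(sim, dim=-1)

    s_rel = relation(student_tokens)
    with torch.no_grad():
        t_rel = relation(teacher_tokens).exp()        # teacher as probabilities
    return F.kl_div(s_rel, t_rel, reduction="batchmean")
```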